
16 August 2013

Martín Ferrari: Impostor syndrome

Do you feel like an impostor? Do you fear other people realising that you are not as good as your peers? You are not alone: many people feel the same, and it goes away eventually. I know because I've been there; it is a really awful feeling that kept me depressed and unhappy for months. But you can try to fight it! This is an excellent document from the Geek Feminism Wiki, which I found thanks to pabs@. Do yourself a favour and read it: Impostor syndrome.

13 August 2013

Martín Ferrari: DebConf 13

On Sunday I arrived at DebConf 13. It has been so much fun that I didn't have the time to post anything about it! As usual, I really enjoy meeting old friends and putting faces to nicknames. Last night the Cheese and Wine party was once again great. Not everything has been partying, though. I've been discussing with Enrico ideas for recognising Debian Contributors, as he presented in his talk on Sunday. We still have to discuss further and, obviously, sit down and write a lot of code :-) Yesterday we also met with Luk, and discussed what to do with the ancient net-tools package. We had had the idea of writing compatibility wrappers using iproute2, but that turned out to be too complicated and brittle. After looking at the current state of net-tools and its reverse dependencies, we decided that the best way to go is to deprecate it by asking rdepends to migrate to iproute2 (for most of them it should be trivial), and then downgrade net-tools to optional. It won't be removed from the archive, as people will still want it, but it will not be required by any core functionality any more. In the next few days, we will be sending an email to debian-devel, and filing about 80 bugs to get rid of the dependency on net-tools, many with patches.

11 August 2013

Lisandro Damián Nicanor Pérez Meyer: Qt in Debian: using Qt4 and/or Qt5 in your packages

Hi everyone! We now have both Qt4 and Qt5 in the archive. Those using Qt4 should not need to make any changes to their packages, although you can take a few steps to be extra safe. Don't rush, just read below.
Some background
Sune took the time some months ago to consult upstream about a sane way to allow both SDKs to coexist, without us distros having to reinvent the wheel in choosing which tools have to be used in each case.

After a long discussion, upstream decided to write qtchooser (already in the archive) to be able to select between Qt4, Qt5 and even special user cases like cross-platform builds.

So instead of going through Debian's alternatives system, as we did with Qt3/Qt4, we will make use of this new tool.
My package uses Qt, how should I proceed?
There are several ways of choosing either version of Qt:

- Using any qtchooser method (preferred):

* Exporting QT_SELECT with 4, qt4, 5 or qt5 as its value in debian/rules (see the sketch after this list).
* Calling the tool with the '-qtX' parameter, where X can be replaced with any of the options above.

- Build-depending on qt4-default or qt5-default. You can't build-depend on both of them, as they can't coexist.
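As an illustration, a minimal debian/rules for a package built against Qt5 using the first method could look like the following sketch (assuming a plain dh sequence; adapt it to your package's build system):

#!/usr/bin/make -f
# Make the qtchooser-wrapped tools (qmake, moc, uic, ...) pick Qt5.
export QT_SELECT := qt5

%:
	dh $@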

It is good to note that:

- Any qtchooser method will take precedence over build-depending on qtX-default.
- If you export XDG_CONFIG_DIRS, it will ignore the default paths to qtchooser's configs that we set up in the packages.

We have also provided qt4-[arch-triplet] and qt5-[arch-triplet] options for special cases.

Once again, if you are already using Qt4, there is no need to rush. See below.

Can I use both Qt4 and Qt5 in my package?
You can't mix Qt4 and Qt5 in the same binary, but you may provide libraries compiled against one or the other. For example, your source package foo could provide both libqt4foo1 and libqt5foo1. You will need to mangle your debian/rules and/or build system accordingly to achieve this. At the time of this writing I don't know of any examples yet.

So, are you going to break the archive with a big transition?
No, we have done our best to avoid having to make any changes to existing Qt4 packages. Qt tools should default to Qt4 unless overridden by any of the methods described above.

My package uses Qt4, can I leave it as it is?
While there is no need to apply the changes in this case, explicitly setting the Qt version will surely not hurt at all. But don't rush ;-)

4 August 2013

Ondřej Čertík: How to support both Python 2 and 3

I'll start with the conclusion: making a backwards-incompatible version of a language is a terrible idea, and it was a bad mistake. This mistake was somewhat corrected over the years by eventually adding features to both Python 2.7 and 3.3 that actually allow running a single code base on both Python versions --- which, as I show below, was discouraged by both Guido and the official Python documents (though the latest docs mention it)... Nevertheless, a single code base fixes pretty much all the problems and it actually is fun to use Python again. The rest of this post explains my conclusion in great detail. My hope is that it will be useful to other Python projects as a source of tips and examples for supporting both Python 2 and 3, as well as to future language designers, as a reminder to keep languages backwards compatible. When Python 3.x got released, it was pretty much a new language, backwards incompatible with Python 2.x, as it was not possible to run the same source code in both versions. I was extremely unhappy about this situation, because I simply didn't have time to port all my Python code to a new language. I read the official documentation about how the transition should be done, quoting:
You should have excellent unit tests with close to full coverage.
  1. Port your project to Python 2.6.
  2. Turn on the Py3k warnings mode.
  3. Test and edit until no warnings remain.
  4. Use the 2to3 tool to convert this source code to 3.0 syntax. Do not manually edit the output!
  5. Test the converted source code under 3.0.
  6. If problems are found, make corrections to the 2.6 version of the source code and go back to step 3.
  7. When it's time to release, release separate 2.6 and 3.0 tarballs (or whatever archive form you use for releases).
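For reference, step 4 boils down to an invocation like the following (the directory name is just a placeholder; -w tells 2to3 to rewrite the files in place, keeping backups):

$ 2to3 -w mypackage/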
I've also read Guido's blog post, which repeats the above list and adds an encouraging comment:
Python 3.0 will break backwards compatibility. Totally. We're not even aiming for a specific common subset.
In other words, one has to maintain a Python 2.x code base, then run the 2to3 tool to get it converted. If you want to develop using Python 3.x, you can't, because all code must be developed in 2.x. As to the actual porting, Guido says in the above post:
If the conversion tool and the forward compatibility features in Python 2.6 work out as expected, steps (2) through (6) should not take much more effort than the typical transition from Python 2.x to 2.(x+1).
So sometime in 2010 or 2011 I started porting SymPy, which is now a pretty large code base (sloccount says over 230,000 lines of code, and in January 2010 it said almost 170,000 lines). I remember spending a few full days on it, and I just gave up, because it wasn't just changing a few things, but pretty fundamental things inside the code base, and one cannot do it half-way; one has to get all the way through and then polish it up. We ended up using one full Google Summer of Code project for it; you can read the final report. I should mention that we use metaclasses and other things that make such porting harder. Conclusion: this was definitely not "the typical transition from Python 2.x to 2.(x+1)".

Ok, after months of hard work by a lot of people, we finally had a Python 2.x code base that can be translated using the 2to3 tool, and it works and tests pass in Python 3.x. The next problem is that Python 3.x is pretty much like a ghetto -- you can use it as a user, but you can't develop in it. The 2to3 translation takes over 5 minutes on my laptop, so any interactivity is gone. It is true that the tool can cache results, so the next pass is somewhat faster, but in practice this still turns out to be much, much worse than any compilation of C or Fortran programs (done for example with cmake), both in terms of time and in terms of robustness. And I am not even talking about pip issues or setup.py issues regarding calling 2to3. What a big mess... Programming should be fun, but this is not fun.

I'll be honest: this situation killed a lot of my enthusiasm for Python as a platform. I learned modern Fortran in the meantime, and with admiration I noticed that it still compiles old F77 programs without modification, and I even managed to compile a 40-year-old pre-F77 code with just minimal modifications (I had to port the code to F77). Yet modern Fortran is pretty much a completely different language, with all the fancy features that one would want. Together with my colleagues I created the fortran90.org website, where you can compare Python/NumPy side by side with modern Fortran: it's pretty much a 1:1 translation with a similar syntax (for numerical code), except that you need to add types, of course. Yet Fortran is fully backwards compatible. What a pleasure to work with!

Fast forward to last week. A heroic effort by Sean Vig, who ported SymPy to a single code base (#2318), was merged. Earlier this year similar pull requests by other people converted the NumPy (#3178, #3191, #3201, #3202, #3203, #3205, #3208, #3216, #3223, #3226, #3227, #3231, #3232, #3235, #3236, #3237, #3238, #3241, #3242, #3244, #3245, #3248, #3249, #3257, #3266, #3281, #3191, ...) and SciPy (#397) code bases as well. Now all these projects have just one code base and it works in all Python versions (2.x and 3.x) without the need to call the 2to3 tool. With a single code base, programming in Python is fun again. You can choose any Python version, be it 2.x or 3.x, and simply submit a patch. The patch is then tested using Travis-CI, so that it works in all Python versions. Installation has been simplified (no need to call any 2to3 tools and no more hacks to get setup.py working). In other words, this is how it should be: you write your code once, and you can use any supported language version to run it, compile it, or develop in. But for some reason, this obvious solution has been discouraged by Guido and other Python documents, as seen above.
I just looked up the latest official Python docs, and that one is no longer upfront negative about a single code base. But it still does not recommend this approach as the one to use. So let me fix that: I do recommend a single code base as the solution. The newest Python documentation from the last paragraph also mentions:
Regardless of which approach you choose, porting is not as hard or time-consuming as you might initially think.
Well, I encourage you to browse through the pull requests that I linked to above for SymPy, NumPy or SciPy. I think it is very time consuming, and that's just converting from 2to3 to a single code base, which is the easy part. The hard part was to actually get SymPy to work with Python 3 (as I discussed above, that took a couple of months of hard work), and I am pretty sure it was pretty hard to port NumPy and SciPy as well. The docs also say:
It [a single code base] does lead to code that is not entirely idiomatic Python
That is true, but our experience has been that, with every Python version that we drop, we also delete lots of ugly hacks from our code base. This has been true for dropping support for 2.3, 2.4 and 2.5, and I expect it will also be true for dropping 2.6 and especially 2.7, when we can simply use the Python 3.x syntax. So not a big deal overall. To sum this blog post up: as far as I am concerned, pretty much all the problems with supporting Python 2.x and 3.x are fixed by having a single code base. You can read the pull requests above to see how we implemented things (like metaclasses, and other fancy stuff...). Python is still quite the same language; you write your code, you use a Python version of your choice and things will just work. Not a big deal overall. The official documentation should be fixed to recommend this approach, and deprecate the other approaches. I think that Python is great and I hope it will be used more in the future.
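To make the single-code-base approach concrete, here is a minimal sketch of the kind of compatibility module such projects keep around (the module and function names here are illustrative, not taken from SymPy or NumPy):

# compat.py -- a tiny Python 2/3 compatibility layer (illustrative)
import sys

PY3 = sys.version_info[0] >= 3

if PY3:
    string_types = (str,)           # all text is str on 3.x
    integer_types = (int,)
else:
    string_types = (basestring,)    # covers str and unicode on 2.x
    integer_types = (int, long)

def ensure_text(value, encoding="utf-8"):
    """Return value as text on both Python 2 and 3."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

The rest of the code base then imports PY3, string_types, etc. from this one module instead of branching on the interpreter version everywhere.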
Written with StackEdit.

24 July 2013

Martín Ferrari: Of Grafton street and Hanbury lane

It finally happened: I'm at boarding gate 423, about to get on the plane that will take me out of Dublin. It seems I didn't get convinced by this article. Still, I've tried to see most of the things on that list, and to see a fair bit of the island. I am not sad; I was expecting myself to break down and make a big Greek drama, but nothing like that happened. In the end, Dublin is the one place (along with Buenos Aires) where I'll be coming back often. Still, this song has been in my head for days.
(Video: http://www.youtube.com/v/aMxKggsz0fM)

14 July 2013

Lisandro Damián Nicanor Pérez Meyer: My experiences with KMail2 in Debian

Thanks to the Qt/KDE team, especially Maxy, who has done most of the packaging and uploading, sid users are now enjoying KDE 4.10.5, including the new KDE PIM stuff that we had held back for the Wheezy release.

I started using KMail2 (inside Kontact) a few days after Wheezy's release, getting it from experimental. And I have to admit that I really like it, just like with KMail1.

But my upgrade did have some bumps in the road, so I'm sharing them here so you can know how I solved them.

Mail import worked as we were expecting: it just worked. So it was really useful to hold back KMail1 until this really worked.

Now, I had a problem with my hard disk: whenever KMail started, it would start accessing it without pause. There were two reasons for this (from what I could test; I haven't looked at the source to really see if there was some other oddity): I had a nepomuk/virtuoso DB created quite some time ago, and the initial mail indexing.

The initial mail indexing takes lots of time. For 1GB of DIMAP I had to wait like 5 hours (yes, 5 hours) on a 5600 rpm disk to let it fully finish. My desktop machine, with a faster hard drive, took a little less.

As far as people told me, that should have been enough, but my disk kept crawling. So I remembered someone from the team saying something about people with early-created nepomuk/virtuoso databases having some speed issues. Mine were more than that, but trying was worth the shot.

I had had nepomuk disabled since I tried its first version, due to this exact problem. So I closed my KDE session and removed the nepomuk/virtuoso data:

rm -r ~/.kde/share/apps/nepomuk/

Then I logged back into KDE and waited (again) for nepomuk to re-index my mail, this time totally finishing after 5 hours. Starting from that point, I get one or two minutes of disk thrashing sometimes when I log in (not always), but it's actually not that bad. And I heard that in KDE 4.11 this has been improved a lot, so I should see even better behaviour from that point on.

Please understand that this was my trial-and-error fix; it may well be that someone comes up with a better solution :-)

2 July 2013

Ondřej Čertík: My impressions from the SciPy 2013 conference

I have attended the SciPy 2013 conference in Austin, Texas. Here are my impressions.

Number one is the fact that the IPython notebook was used by pretty much everyone. I use it a lot myself, but I didn't realize how ubiquitous it has become. It is quickly becoming the standard now. The IPython notebook uses Markdown, and in fact it is better than ReST. The way to remember the "[]()" syntax for links is that in regular text you put links into () parentheses, so you do the same in Markdown, and prepend [] for the text of the link. The other way to remember is that [] feel more serious and thus are used for the text of the link. I stressed several times to +Fernando Perez and +Brian Granger how awesome it would be to have interactive widgets in the notebook. Fortunately that was pretty much preaching to the choir, as that's one of the first things they plan to implement good foundations for, and I just can't wait to use that.

It is now clear that the IPython notebook is the way to store computations that I want to share with other people, or to use it as a "lab notebook" for myself, so that I can remember what exactly I did to obtain the results (for example how exactly I obtained some figures from raw data). In other words --- instead of having sets of scripts and manual bash commands that have to be executed in a particular order to do what I want, just use an IPython notebook and put everything in there.

Number two is how big the conference has become since the last time I attended (a couple of years ago), yet it still has the friendly feeling. Unfortunately, I had to miss a lot of talks due to scheduling conflicts (there were three parallel sessions), so I look forward to seeing them on video.

+Aaron Meurer and I gave the SymPy tutorial (see the link for videos and other tutorial materials). It was nice to finally meet +Matthew Rocklin (a very active SymPy contributor) in person. He also had an interesting presentation about symbolic matrices + Lapack code generation. +Jason Moore presented PyDy.
It was a great pleasure for us to invite +David Li (still a high school student) to attend the conference and give a presentation about his work on sympygamma.com and live.sympy.org.

It was nice to meet the Julia guys, +Jeff Bezanson and +Stefan Karpinski. I contributed the Fortran benchmarks on Julia's website some time ago, but I had the feeling that a lot of them are quite artificial and not very meaningful. I think Jeff and Stefan confirmed my feeling. Julia seems to have quite an interesting type system and multiple dispatch, which SymPy should learn from.

I met the VTK guys +Matthew McCormick and +Pat Marion. One of the keynotes was given by +Will Schroeder from Kitware, about publishing. I remember him stressing to manage dependencies well, and to use BSD-like licenses (as opposed to viral licenses like the GPL or LGPL). Also that open source has pretty much won (i.e. it is now clear that that is the way to go).

I had great discussions with +Francesc Alted, +Andy Terrel, +Brett Murphy, +Jonathan Rocher, +Eric Jones, +Travis Oliphant, +Mark Wiebe, +Ilan Schnell, +Stéfan van der Walt, +David Cournapeau, +Anthony Scopatz, +Paul Ivanov, +Michael Droettboom, +Wes McKinney, +Jake Vanderplas, +Kurt Smith, +Aron Ahmadia, +Kyle Mandli, +Benjamin Root and others.


It's also been nice to have a chat with +Jason Vertrees and other guys from Schrödinger.

One other thing that I realized last week at the conference is that pretty much everyone agreed that NumPy should act as the default way to represent memory (no matter if the array was created in Fortran or other code) and allow manipulations on it. Faster libraries like Blaze or ODIN should then hook into NumPy using multiple dispatch. SymPy would then also hook itself up so that it can be used with array operations natively. Currently SymPy does work with NumPy (see our tests for some examples of what works), but the solution is a bit fragile (it is not possible to override NumPy behavior, but because NumPy supports general objects, we simply give it SymPy objects and things mostly work).

Similarly, I would like to create multiple dispatch in the SymPy core itself, so that other (faster) libraries for symbolic manipulation can hook themselves up, and their own (faster) multiplication, expansion or series expansion would get called instead of the SymPy default one implemented in pure Python.

Other blog posts from the conference:

29 May 2013

Lisandro Damián Nicanor Pérez Meyer: Presenting qtchooser

A few days ago we, the Qt/KDE team, uploaded a new tool to the Qt ecosystem: qtchooser.

This tool is a wrapper used to select between different Qt versions. Of course, the first and easiest example is choosing between Qt4 and Qt5. But it doesn't end there: it can also be used to select a user's own build of Qt.

To experienced Debian users it might, at first sight, resemble Debian's alternatives system. But it goes much further than that, allowing users (not just sysadmins) to decide their defaults, and even to add new ones, user-wide. All this can be done using different methods: command-line arguments, environment variables and configuration files.
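For example (assuming the options documented in qtchooser's man page), a user can inspect and pick a version straight from the command line:

$ qtchooser -list-versions      # show the Qt versions qtchooser knows about
$ QT_SELECT=5 qmake -version    # run a wrapped tool against Qt 5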

Apart from all that, this is the upstream-recommended way of managing Qt versions, and it is being picked up by several (if not all) distributions, so it can easily be supported by upstream in their documentation.

My Qt4 package in Debian does not use qtchooser, do I need to change anything?
No, we have tried to make things as smooth as possible. Your Qt4 packages should be safe. There will be more info on this later.

22 May 2013

Lisandro Damián Nicanor Pérez Meyer: Debian/Ubuntu packages caching and mobile workstations

Not so long ago I read Dmitrijs' blog post on how to configure apt-cacher-ng to advertise its service using avahi. As I normally use my laptop at home and at work, and both networks have apt-cacher-ng running, I decided to give it a try.

I have been administering apt-cacher-ng for three networks so far, and I really find it a useful tool. Then, thanks to the aforementioned blog post, I discovered squid-deb-proxy. I don't use squid, so it's not for my normal use case, but some people will surely find it interesting.

But I found its client package to be really interesting. It will discover any service providing _apt_proxy._tcp through avahi and let apt use it. But the package wasn't available in Debian. So I contacted Michael Vogt to see if he was interested in putting at least the client in Debian's archive. He took the opportunity to upload the full squid-deb-proxy, so thanks a lot Michael :-)

I then filed a wishlist bug against apt-cacher-ng to provide the avahi configuration for publishing the service, which Eduard included in its latest version. So thanks a lot Eduard too!

tl;dr
You now only need apt-cacher-ng >= 0.7.13-1 and avahi-daemon installed on your server, and your mobile users just need squid-deb-proxy-client. Then the proxy autoconfiguration for apt will just work.
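In other words, something like this (package names as above; a sketch):

# on the server offering the cache
apt-get install apt-cacher-ng avahi-daemon

# on each roaming client
apt-get install squid-deb-proxy-client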

Once again, thanks a lot to the respective maintainers for allowing this into Jessie :-)

Gotchas
Yes, there are still some rough edges. On one of the networks I'm behind a proxy. When configuring my machine to use apt-cacher-ng's service as a proxy through apt.conf, apt-listbugs would just work. But now, using the service as discovered by squid-deb-proxy-client, apt-listbugs just times out. Maybe I need to file some other bug yet...

15 May 2013

Lisandro Damián Nicanor Pérez Meyer: Qt 4.8.4 in experimental.

For a few days now we have had Qt 4.8.4 (4:4.8.4+dfsg-3) in some archs of the experimental Debian archive. This release allows Qt4 to coexist with Qt5 while avoiding FTBFSs of current Qt4 packages in the archive.

So if you maintain a Qt4 app and want to check how it works with 4.8.4, you should be ready to go.

14 May 2013

Martín Ferrari: A new life

A week ago, I made the big step and presented my resignation letter at Google. It was not an easy decision to leave a good job to pursue a blurry plan that sounds a bit infeasible, but I feel this is what I want to do: it's a dream becoming reality. After the 31st of May, I will become self-employed, working as a freelancer while travelling around the world. I plan to live on a small budget, working with my laptop from wherever I am, instead of stressing about getting many clients to keep an expensive lifestyle. I've had the travelling bug for some time, always thinking about my next trip, leaving for the airport just after finishing work, coming back on Monday and going directly to the office. You end up wishing for more vacation days all the time (and I had a fair amount of them). Now, for different reasons, I want to spend some time in my old house in Nice, and in Argentina. There was no way I could do that with my current job, and that was the trigger for my decision. After that, I will come back to Ireland, just to think about where my next destination will be. I know this is going to be a great experience; we'll see how well it works! If you think you -or your employer- might need my services, I'd be more than happy to talk! I'll be concentrating on the kind of work I've been doing at Google and before: finding creative solutions for difficult problems, be it systems administration or (systems) programming. Think of hiring an SRE for just a few hours or days.

4 May 2013

Russ Allbery: Review: Democracy at Work

Review: Democracy at Work, by Richard Wolff
Publisher: Haymarket
Copyright: 2012
ISBN: 1-60846-247-1
Format: Kindle
Pages: 220
I've been reading (mostly on-line) and thinking quite a bit lately about workplace governance models, economic structure, and why the current organization of the US workplace bothers me so intensely, partly triggered by reading John Kenneth Galbraith's The Affluent Society. The economic monoculture has made that process particularly frustrating. It's rare to find a discussion, even in the context of organizational strategies that are considered radical, that avoids the standard frame of productivity and business value. Most discussion is long on tactics and short on strategy and examination of goals. Wolff's appearance on Moyers & Company was a rare breath of fresh air, enough so that I grabbed his book shortly afterwards. Wolff is a Marxian economist (meaning that he makes use of Marxist analysis of economics and capitalism while separating them from Marxist politics and Marx's advocacy of revolutionary socialism), which for me was part of the interest. Marxist thought of any branch is not common in the United States; we're regularly deprived of several sides in the international conversation on economic models. I was taught Marxist theory in elementary and high school (in retrospect, surprisingly even-handedly and well, despite the biases of my schooling), but none of the later developments of Marxist thought. I think that's a typical experience here; in the United States, Marxism culminated in Mao and Stalin, and no further development of the underlying theories is ever mentioned. Democracy at Work is subtitled A Cure for Capitalism and does indeed advocate a concrete alternative to capitalist business structures. But this is only the last third of the book (and in some ways the least useful, as I'll discuss in a moment). The first two-thirds of the book is basically a remedial education in modern, as opposed to historic, Marxian economics for US readers like myself who have never heard it before, cast in the context of the current financial crisis. This may well be old hat for Europeans, but if you've been wondering what (at least some) modern-day Marxists actually believe, or are saying to yourself "there are modern-day Marxists after the collapse of the Soviet Union?", I recommend this book to your attention. It's an excellent summary, which I read with the delightful feeling of an expanding viewpoint and the discovery of new directions from which to look at a problem. There's quite a bit in this section that's worth thinking about, including another take on the nature of the recent economic collapse and how that fits into a Marxian analysis of capitalist crises. But there was one point in Wolff's explanation that I found particularly helpful. He completely restructured my understanding of the Marxian analysis of worker exploitation and profit allocation. There are two angles of Marxist economic thought and socialist economics that get a great deal of attention, at least in the United States, in history and economics classes: the role (or lack thereof) of markets in price setting, and the ownership of the means of production. Defenders of capitalism like to focus on the former, since it's quite easy to identify the advantages of theoretical free markets in finding ideal prices and balancing supply and demand, whereas central planning of prices and production has resulted in some catastrophic and deadly failures. 
(Although I will note, with passing interest, that those failures predate large-scale computing, and there are now large corporations that manage budgets larger than some countries via centralized command-and-control economic practices.) Defenders of socialism are more likely to focus on the ownership of the means of production, since it's easy to show prima facie unfairness in owners of capital extracting vast profits without having to do any work themselves, only being lucky enough to start with large quantities of money. Wolff, however, argues that both of these focuses miss a core critique by Marx of the workplace structure in capitalism, and that, by ignoring that critique, supposedly Marxist countries did not create anything that was actually Marxist in implementation. The Soviet Union was just as much a capitalist country as the United States is. It was state capitalism rather than private capitalism, but the core capitalist structure was intact. Wolff arrives at this conclusion, which may be well-trodden ground in parts of the world that include active Marxist thought but which was quite startling to this American, by treating the ownership of capital as a partial distraction. He focuses on a more direct and practical question: who determines what a worker does on a day-to-day basis and how that product is used? Who determines what profits are collected and how they are spent? In private capitalism, this is done by the owners of the capital: large shareholders, major investors, and the managerial class that they hire. In Soviet state capitalism, this is done by national politicians, bureaucrats, and the managerial class that they hire. In neither model is it done by the workers themselves. The Soviet model gives theoretical ownership of the capital to the workers, but that ownership is diffused, centralized, and politicized, redirected through the mechanisms of the state, and therefore is effectively ignored. Ownership and control are entirely captured by the political class. Both of these systems are capitalist in Wolff's view of Marx: there is a class of owners and managers, who control the terms and nature of work and who allocate the profits, and a class of workers, who have to do what they're told, are not paid full value for their labor, and don't have a say in how the profits their work generates are spent. At the most important level of day-to-day autonomy and empowerment, they are functionally identical. They are both equally hierarchical and exploitative; the only difference is in whether the system is controlled by rich individuals or by well-connected politicians. (And, as any study of modern politics quickly reveals, the distinction between those two groups in most countries is murky at best.) Wolff convincingly recasts modern economic history as a constant pendulum swing between private capitalism and state capitalism. Crises in one system push countries towards the other system; subsequent crises push the country back towards the first system. Regulation grows and shrinks, companies are nationalized and then privatized, but both systems are united in excluding the worker from any meaningful control over their work life. The first two-thirds of the book was full of insights like this for me. I didn't agree with all of it, but all of it was worthwhile and thought-provoking. But I was a bit leery of Wolff's proposed solution.
My past experience with critics of capitalism is that the critique is often quite compelling, but the proposed solution is much less believable. And, sadly, that concern was warranted here as well. The core of Wolff's proposal is predictable but possibly sound: a restructuring of the workplace to be radically democratic. The business would be owned entirely and exclusively by the people who work for it, equally regardless of the job of the worker, and the workers would decide democratically or via elected representatives from among the workers how to allocate the profits of the business, what standards and business practices should be followed, and how the work is to be done. I was particularly interested to hear that this model (workers' self-directed enterprises) has apparently been successful in Spain in the form of the Mondragon Cooperative. Given all the tricky, small details that have to be resolved in an actual workplace, an existence proof is worth more than pounds of theory. Unfortunately, like a lot of proposed alternatives, Wolff's description of WSDEs is quite fuzzy and involves a lot of hand-waving. I was never able to build, from this book, a coherent and complete mental model of how such a workplace would function. Wolff tends to surround every specific in a halo of contingencies, possibilities, and alternative models, and is maddeningly nonspecific on such practical matters as how line management would work, how such a business would do financial planning or project approval, how competing interests in different parts of the organization would be balanced, and other practical governance matters that fill my work life. Maybe the answer to all of that is just "democracy," but I'm dubious. Democracy has a number of well-known flaws that I thought weren't adequately addressed. For example, democracies are often quite happy to further and reinforce existing prejudice (such as sexism or racism), and are prone to yielding control to the most charismatic. Democracies also have an informed voter problem, which seems like it would be particularly acute if democracy is going to make detailed business decisions. And, for larger organizations, control by pre-existing money could re-enter the equation via propaganda and campaigning around votes. Some of this gap in the book could be addressed via a more in-depth look at how Mondragon and any other real-life examples work, but that is sadly missing here. (I am interested enough now, though, that I'd read a good popular treatment of the history and methodology of Mondragon, although I don't think I'm up to working through an academic study.) The hierarchical, dictatorial management structure imposed by capitalism is so awful that WSDEs don't have a particularly high bar to meet to be fairer and more empowering than what we have today. The question, rather, is would they function sufficiently well that the business would be able to make effective decisions, and that's unclear to me from this book. This is, as Wolff spends some time discussing, particularly difficult when in direct competition with capitalist enterprises. This sort of endeavor will probably trade some degree of economic efficiency and raw marketplace power for improvements to fairness and empowerment, but that means it's going to require support from the surrounding society, which is a huge obstacle. I doubt there's a free lunch here: true fairness and equity also leading to improved economic efficiency in a capitalist context is a nice dream, but not horribly practical.
Wolff also seems to suffer from something else I associate with Marxist thought: a preferential focus on a very narrowly-defined type of productive worker, apparently left over from Marx's original critique in the context of industrialization. Wolff inserts a very odd and quite awkward distinction in his WSDE model between workers who directly produce the product that's sold, and who in his model therefore produce the profits of the business, and all other workers in the business. He then gives special privileges to the former group to decide how much profit to return in worker compensation and how much to use for other purposes, thus making the supporting workers second-class citizens within this supposedly equal workplace. Speaking as someone who works in IT, and hence would be classified by Wolff into the supporting rather than directly productive category, I do not find this division at all convincing, and Wolff never provides a coherent explanation for why he introduced it. He only says that it's necessary for the governance of the business to not be exploitative, which seems to assume that there is a special economic role played by the workers who work directly on a saleable product. Maybe there is some analysis that could convince me of this, but, if so, it's not present in this book. It struck me as a recipe for continuing the exploitation of the most invisible and powerless workers in capitalism: janitors, groundskeepers, and other low-paid service jobs. I wish the whole book were as insightful and pointed as the first two-thirds, but alas I found the WSDE discussion to be somewhat muddled and utopian. "How do we get there from here" is always the hardest part of this type of discussion, and Wolff has no special skill in that department. But, despite that, I got a lot of fascinating ideas and new conceptual frameworks out of this book, and I'm tempted to read it again. I suspect some of this, similar to my discovery of promises and infinite streams in programming, is filling in of odd gaps in my personal education rather than a discovery of unusual, new ideas. But if you too have gotten your political education within the US capitalism über alles bubble, this book may fill in similar gaps in your knowledge. If so, it's a very rewarding experience. If you're curious about a preview of Wolff's perspective without paying for the book, I recommend watching the first episode of Moyers & Company in which he appeared. Wolff is a clear and engaging speaker, and his interview provides a good feel for his discussion style and his general perspective. Rating: 8 out of 10

29 April 2013

Lisandro Damián Nicanor Pérez Meyer: On the road to Qt5: declarative, graphicaleffects and svg in experimental.

Some more Qt 5 packages have entered Debian experimental:
Enjoy :-)

Martín Ferrari: Setting up my server: netfilter

I was going to start this series by explaining how I did the remote set-up, but instead I will share something that happened today. One of the first things you want to do when putting a server directly on the Internet is some filtering. You don't want to have an application listening on the network by mistake, so a simple netfilter firewall is a good way to ensure you are only accepting connections on ports you explicitly allowed. I have been a long-time user of ferm, a simple tool that reads a configuration file written in a special structured syntax and generates iptables commands from it. I have used it successfully to build very complex firewalls in previous jobs, and it had the huge benefit of keeping your firewall description readable and easy to modify by other people.

This time I thought I might go with something simpler, as I only wanted a handful of very simple netfilter rules. I looked at Shorewall, and browsed a few others. But in the end I decided against them: there was the need to learn the tools' concepts about different parts of the network, or they were more slanted towards command-line invocations, so your actual configuration would be some files in /var/lib, totally managed by the tool. With ferm, I just need to write a very small configuration file, which reads almost like iptables commands, and that's it. In fact, the default configuration shipped by the Debian package already did 90% of what I wanted: accept incoming SSH connections and ICMP packets, and reject everything else. I took the example IPv6 configuration from /usr/share/doc/ferm/examples/ipv6.ferm, and in 10 minutes it was ready:
table filter {
    chain INPUT {
        policy DROP;
        mod state state INVALID DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;
        interface lo ACCEPT;
        proto icmp ACCEPT;

        # allow IPsec
        proto udp dport 500 ACCEPT;
        proto (esp ah) ACCEPT;

        proto tcp dport ssh ACCEPT;
        proto tcp dport (http https) ACCEPT;
    }
    chain OUTPUT policy ACCEPT;
    chain FORWARD policy DROP;
}

domain ip6 table filter {
    chain INPUT {
        policy DROP;
        mod state state INVALID DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;
        interface lo ACCEPT;
        proto ipv6-icmp ACCEPT;
        proto tcp dport ssh ACCEPT;
        proto tcp dport (http https) ACCEPT;
    }
    chain OUTPUT policy ACCEPT;
    chain FORWARD policy DROP;
}
It is important to note that when doing this kind of thing on a remote machine, you want to make sure you don't get locked out by accident. My method is that before activating any dangerous change, I drop an at job to disable the firewall in a few minutes:
# echo /etc/init.d/ferm stop | at now +10min
warning: commands will be executed using /bin/sh
job 4 at Mon Apr 29 02:47:00 2013
And if everything goes well, I just remove the job:
# atrm 4
Update: As paravoid pointed out in the comments, ferm now (read: since many years ago, but I had never noticed) has a --interactive mode which will revert the changes if you get locked out, much like the screen resolution changing dialog in Gnome.
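So the safer way to apply a new configuration would be something like this (a sketch based on the option mentioned above; check ferm(1) for the details):

# ferm --interactive /etc/ferm/ferm.conf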
Another thing that you definitely want to do is to have some kind of protection against the almost constant influx of brute-force attacks against SSH. Apart from the obvious PermitRootLogin=no setting, there are a couple of popular methods to stop people probing random username/password combinations (I am assuming here that you actually have sensible passwords, or no passwords at all): running SSH on a non-standard port, and the great fail2ban daemon. Since I don't like non-standard stuff, I installed fail2ban, which by default will inspect /var/log/auth.log for SSH login failures and insert netfilter rules to block the offenders. Problem is, I don't much like how fail2ban inserts rules and chains into the very tidy netfilter configuration I had just created. So, I added an "action" to do things my way: only create a service-related chain and insert rules there; I call that chain from my main ferm.conf. Ferm runs early in the boot sequence, so this won't be a problem during normal operation. The only caveat is that after changing a configuration in ferm, I need to restart fail2ban so it will recreate the netfilter chains and rules, which were wiped by ferm. This is my configuration; note that I am ignoring the port and protocol: the whole IP is blocked for a few minutes.
# cat /etc/fail2ban/jail.local 
[DEFAULT]
action = iptables-fixed[name=%(__name__)s]
# cat /etc/fail2ban/action.d/iptables-fixed.conf
[Definition]
actionstart = iptables -N fail2ban-<name>
              iptables -I fail2ban -j fail2ban-<name>
actionstop = iptables -D fail2ban -j fail2ban-<name>
             iptables -F fail2ban-<name>
             iptables -X fail2ban-<name>
actioncheck = iptables -n -L | grep -q fail2ban-<name>
actionban = iptables -I fail2ban-<name> 1 -s <ip> -j DROP
actionunban = iptables -D fail2ban-<name> -s <ip> -j DROP
[Init]
name = default

28 April 2013

Martín Ferrari: Moving my stuff away from home

TL;DR version: I want to get rid of the small server running at home; here I tell you about the service I've chosen and why I like it. In following posts, I'll explain how I set it up remotely. Disclaimer: I am in no way affiliated with the companies I mention here (except for Picasa, as I am a Google employee), and I don't get any bonuses for this post. I am only sharing this because I think it might be useful information for other people.
Being a frequent migrant means possessions are a burden. In my previous place of residence (in France), I originally intended to stay for only 6 months, and so I arrived with just a couple of suitcases; in the end that was enough for me to live on for almost 2 years. The last time, on the other hand, I was removing my stuff completely from Argentina. I emptied my house, gave away some stuff, sent some boxes to my parents' place, and carried the rest with me. That was a lot of stuff, but since the company was paying for the relocation, it was not much of a problem. Later I realised my mistake, and knowing that my time in Ireland is limited, I started to try and get rid of stuff I don't need. I know I will just sell or give away much of my stuff when I finally leave, but there are some things that are not so easy to part with. The main one being my home server, which hosts this website, my VCS repositories, pictures, and many other things I need to have on the net. This all used to be located in a home-made PC tucked in a data centre, co-located by a friendly company. But that computer died almost 2 years ago, and so canterville became abhean, and my stuff started being hosted on my ADSL connection. It worked well for some time, but now I realised I had to revert that change.

With this in mind I set off to find a cheap place to host my stuff, and I had a few requirements. I don't have that many photos, nor are they too big, but my requirements made it clear that most VPS offerings were not going to work for me. For some reason I fail to understand, local storage in VPS offerings is usually prohibitively expensive. This is OK for most use cases, but not for mine. A friend of mine, with a similar use case, is a happy VPS customer. He told me his trick: he only hosts low-quality versions of the pictures on the server, and keeps the originals (and back-ups) at home. This was a great idea, but with two fatal flaws: I want to only carry around a laptop and one or two external hard drives, and I want to have back-ups that are not physically with me.

I was starting to think about hosting my files in Amazon S3 or something like that, since most dedicated servers are way too expensive. But then I heard about two French companies offering dirt-cheap servers: OVH and Online.net. Both of them offered small servers for about €12 a month, cheaper than most VPS offerings! Online seems to mainly cater to the French market, and for some silly reason they charge a €50 set-up fee to customers outside of France. OVH, on the other hand, has many local branches, including an Irish one, so I went with them. The offering is a low-cost line called Kimsufi, and the smallest one is still very decent for a personal server.

Once I had paid the fee for one month, it took a while for the server to be activated (their payment system is pretty bad), but it was finally enabled about 24h later. Then the real fun started. On one hand, I was happy to see a wide selection of operating systems to choose from, including Debian stable and testing, and a web console with many functionalities, including some basic monitoring; but on the other hand, I realised that the installed image was not pristine, the online docs are not very good, and the web application is a bit buggy and really awkward to navigate.
Having sub-par docs is not something I would usually care much about, but it made it a bit more difficult for me to understand some of the very cool functionalities their system offers (more on that in a bit), and more importantly, it made it clear to me that I won't trust their image: the procedures detailed there were not exactly best practices, and they allow themselves to log in as root on my server.

I want to describe here what I think are their most interesting features, which made it possible for me to do risky operations, like encrypting the root partition and setting up a firewall, and to fix problems that would usually require physical access. These are found in their web console: a hardware reset, and configurable netboot support with many offered images, including a rescue image based on Ubuntu and one that serves as a virtual KVM. (It is surprising that these servers don't have a serial console, but at least the kernel does not detect any.) With these in hand, I didn't have to fear being locked out of my server forever. Just set up a netboot image and hard-reboot the machine! Also, it made it very simple to install my system from scratch with debootstrap.

The virtual KVM is a very interesting trick. It is a netboot image that runs some tests and fires up a web browser. You get an email with the URL and a password to access it, and then you open a page that offers you what is basically a Qemu connected to a VNC server, which will boot from your real hard drive. It is super slow, but it allows you to get console access to your server, which can be very handy for debugging booting problems, unless the issue is with the real hardware. It also offers the possibility of downloading an ISO image off the network and booting that, so it can be used to run a stock installer CD too.

In another post I'll describe how I reinstalled my server remotely, and some of the pitfalls I encountered in the process.

21 April 2013

Lisandro Damián Nicanor Pérez Meyer: On the road to Qt 5: Qt 5 base, tools, jsbackend and xmlpatterns in experimental

The first Qt 5 packages have been accepted in Debian experimental.

What's there
To start building Qt 5 apps you will need to export QT_SELECT=qt5, install the package qt5-default, or read qtchooser's man page. Note that exporting QT_SELECT takes precedence over installing qt5-default.
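For example, from a shell (a sketch; the qt5-default package is not needed if QT_SELECT is exported):

$ export QT_SELECT=qt5
$ qmake -version    # should now report Qt 5.x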

What's not there
Architectures
AMD64 is already there, because it's the arch the maintainers use to build the packages. i386 should follow as soon as the buildds catch up. Most surely ARM-based archs will be there at some point too.

Other archs will need some love. Not strange: the Qt project supports amd64, i386 and ARM, but we in Debian have normally prepared patches to make it build on other archs. And yes, we try to push them upstream for everyone's benefit. So, if Qt 5 is missing on your arch, take a look. You may be the one who enables Qt 5 on it :-)

GLES2 and Wayland
We don't have GLES2 or Wayland support yet. Building it will most probably break the desktop for people using proprietary video drivers (or at least I was told so). I'll surely provide non-official packages with GLES2/Wayland enabled to allow people to test it, but not soon.

This also means that we are not currently able to split X11 and framebuffer support. But we have time to work on it :-)

Non DFSG-compliant files
If you get the original source code tarball from Debian you will notice that it has dfsg in its name. That means that we had to remove some non DFSG-compliant stuff from the original tarball, namely:

  • Every RFC.
  • Three files used for testing the build, which are made of RFCs.
  • Some fonts.

What's following
Other parts of Qt 5 are on the way. And remember, these packages would not have been possible without Debian's great pkg-kde team. My kudos to them.

1 April 2013

Lisandro Damián Nicanor Pérez Meyer: On the road to Qt 5: Qt 4.8.4 and qtchooser uploaded to experimental

I have just uploaded Qt 4.8.4 to experimental. Apart from being the newest upstream release of the 4 series, it adds the basic support for coexisting with Qt 5. Because of this support it will need to go through the NEW queue, though, so we will need to be patient.

Developers will be interested in qtchooser, the tool that allows switching between the Qt 4 and 5 development apps. It has also been uploaded to experimental.

Of course, all this has been possible thanks to the efforts of the wonderful Qt/KDE team =)

Update: our efficacious FTP masters have already made Qt pass the NEW queue. Thanks a lot!

5 March 2013

Lisandro Damián Nicanor Pérez Meyer: My Debian freeze experience (so far)

This is the first freeze in which I'm involved with upload rights. It has turned out to be quite an interesting ride so far, so I thought it would be nice to write about it.

As some of you may know, I'm a part of the Qt/KDE team. Before the freeze I was mostly involved in leaf packages, with some patch here or there, nothing fancy. And then the freeze came...

Bugs in Qt

...and bugs appeared in Qt. But they didn't get solved, even though the patches were there. Due to personal reasons, the manpower in Qt/KDE land decreased below normal levels (which were already low).

I took the time to review them, apply them in a local branch, and build and test the fixes. I had done a Qt upload before, but it was a team-consented one. This time there was not as much reaction in our IRC channel as there used to be, so I was doubting whether to go ahead or not. I asked Ana, my great friend and former sponsor, for an opinion on the subject, and she gave me a really important piece of advice: the patches were looking good, and there is one really big truth: if something gets broken, it can be fixed with a later upload.

You might be asking yourself why I was that afraid of doing the upload. Well, when one maintains such a core package for so many users, one has to be careful. And I had also got used to those "big ones" like Qt normally being handled by highly skilled people. Don't take me wrong here: it's not that those people were keeping them for themselves, it's knowing that one does not have the same skills or experience as they do.

But again, no one else was able to upload, and I had the chance and the will to do another upload if needed, so off it went. That was Qt 4:4.8.2-2.

Then new experiences followed: asking a buildd maintainer for a giveback, asking the Release Team for an unblock (more on this later), etc. While sponsoring me, Ana gave me another excellent piece of advice which I always keep in mind:

You can't know **everything** about Debian.

And that also includes a not-so-technical skill: communicating with other teams. But finally we got this new version of Qt into testing. Cool :-)

Of course, new bugs appeared, and my lack of skills (and sometimes, time) was made up for by team work: Pino looking at patches and Sune contacting upstream. The eleven uploads that followed are a nice example of team work, even if I was the one who signed and did the uploads. Whoever uses Qt should know that these wonderful people (including those who are not so active nowadays, like Modestas or Fathi) have done a lot to bring the best to their users. Thank you guys!

Be careful, they might bite you back!

Coming back to the non-technical skills: sometimes you have to communicate with other teams in Debian. And each team is (naturally) a separate world: possibly different people, different goals, etc. Of course, we share the goal of making Debian the best experience we can, but we do not necessarily agree on the paths to achieve that.

During the freeze, there is a team that gets lots of pressure, and not by chance: the Release Team. They handle a very important task, which is to ride the freeze to get to a release. OK, that's what everyone knows. Now, one thing is knowing that, and another is really understanding what that means.

Of course, I was in the first group. From the outside, communicating with the RT seemed like a kind of "special art", and not an easy one. I had even been advised not to ask for more than one or two unblocks per weekend, as they might "bite me back". So I put my flamesuit on and... launched reportbug release.debian.org.

Now I'm really happy to say that my experience was far from what I described above. And yes, I even had the chance to disagree on some stuff. But remember: non-technical skills, a.k.a. social skills. Once I started to learn what was going on inside the RT (joining #debian-release was a big help for that), I picked up some nice tips for approaching them. Please allow me to list some of them:

  • Remember: you are the maintainer of the package; they are like gatekeepers who are there to help us coordinate to get a release out. But they don't maintain the code, you do. So try to be verbose when needed: explain the changes and don't forget a nice diff. They need to understand what is going on; they can't read your mind.
  • They are human beings too: not every day might be their best day (the same goes for you!). And they are under the pressure of a release. Be patient; that finally pays off.
  • Do your changes seem not so clear? Try to improve them.
  • Does the package have a lot of changes, but you really feel they are needed? Try to explain that as well as you can.
  • Try to put yourself in their position: do we really want this? If in doubt, there is a nice way to know what they think: a pre-approval bug.
I want to dwell on this last point. A pre-approval bug is just an unblock bug in which you edit the subject to add "pre-approval". Easy, isn't it? It gives you the opportunity to know what the RT thinks before doing the upload. In other words, it gives you the chance to communicate and do things in the best possible way for all the parties involved.

I have also seen pre-approval bugs that were really not needed. But to learn where the threshold lies between what can be directly uploaded and what deserves a pre-approval bug, you need to know the guidelines the RT gives you. Do you still have doubts? File a pre-approval bug and try to be clear.

Of course, these are all fruits of my experience with the RT during this time. If the RT thinks differently from what I'm writing here, please stand up: we are here to listen to you and learn :-)

As a side note, I think I should file a wishlist bug to include the pre-approval bug option in reportbug. Yes, I'm lazy :-)

Summing up

Overall this was a very nice and positive experience. We are not done yet. Are we really done at some point? Let's hope not, because this is where the fun comes from :-)

14 December 2012

Martín Ferrari: Fairytale of New York

Things I love about Ireland, partial list. This:
(Video: http://www.youtube.com/v/j9jbdgZidu8)

25 November 2012

Martín Ferrari: MiniConf12

I'm typing this from the departure gate of the Beauvais airport, about to board my plane back home. I came to Paris for the Debian MiniConf, a.k.a. another excuse to meet so many friends. Needless to say, it was great fun. I have already signed and mailed all the keys from the KSP, and spent some time during the weekend finishing the transition of my blog from Blosxom to Ikiwiki. I liked Blosxom, especially the fact that it produced static pages, but I wanted to try Ikiwiki, with its elegance and so many interesting features. The combination of web editing, backed by revision control, with a static blog finished convincing me. So, I have just changed the feed link in Planet. I hope I don't flood you! (I took the effort of adding meta tags to all my old posts so the GUIDs won't change.) And thanks to the MiniConf team!
